Theta-regularized Kriging: Modelling and Algorithms

Xie, Xuelin, Lu, Xiliang

arXiv.org Machine Learning

To obtain more accurate model parameters and improve prediction accuracy, we propose a regularized Kriging model that penalizes the hyperparameter theta of the Gaussian stochastic process, termed Theta-regularized Kriging. We derive the optimization problem for this model from a maximum likelihood perspective and present implementation details for the iterative process, including the regularized optimization algorithm and a geometric-search cross-validation tuning algorithm. Three penalty methods are considered: Lasso, Ridge, and Elastic-net regularization. The proposed Theta-regularized Kriging models are tested on nine common numerical functions and two practical engineering examples. The results demonstrate that, compared with other penalized Kriging models, the proposed model performs better in terms of accuracy and stability.
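The penalized-likelihood idea can be sketched as follows: the concentrated negative log-likelihood of an ordinary Kriging (Gaussian process) model with a Gaussian correlation kernel, plus an Elastic-net penalty on theta (alpha = 1 recovers Lasso, alpha = 0 Ridge). This is a minimal illustration of the general idea, not the paper's exact formulation; the function names and the small nugget added for numerical stability are our own assumptions.

```python
import numpy as np

def gaussian_corr(X, theta):
    # Correlation matrix R_ij = exp(-sum_k theta_k * (x_ik - x_jk)^2)
    d2 = (X[:, None, :] - X[None, :, :]) ** 2   # (n, n, p) squared differences
    return np.exp(-(d2 * theta).sum(axis=-1))

def penalized_nll(theta, X, y, lam, alpha):
    # Concentrated negative log-likelihood of ordinary Kriging
    # plus an Elastic-net penalty on theta.
    n = len(y)
    R = gaussian_corr(X, theta) + 1e-10 * np.eye(n)  # nugget for stability
    L = np.linalg.cholesky(R)
    one = np.ones(n)
    Rinv_y = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Rinv_1 = np.linalg.solve(L.T, np.linalg.solve(L, one))
    mu = one @ Rinv_y / (one @ Rinv_1)               # generalized least-squares mean
    r = y - mu
    sigma2 = r @ np.linalg.solve(L.T, np.linalg.solve(L, r)) / n
    log_det = 2.0 * np.log(np.diag(L)).sum()
    nll = 0.5 * (n * np.log(sigma2) + log_det)       # constants dropped
    penalty = lam * (alpha * np.abs(theta).sum()
                     + (1.0 - alpha) * (theta ** 2).sum())
    return nll + penalty
```

In practice one would minimize `penalized_nll` over theta > 0 with a bound-constrained optimizer (e.g. L-BFGS-B) and tune `lam` by cross-validation, which is where a geometric search over candidate penalty strengths would enter.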


Between Resolution Collapse and Variance Inflation: Weighted Conformal Anomaly Detection in Low-Data Regimes

Hennhöfer, Oliver, Preisach, Christine

arXiv.org Machine Learning

Standard conformal anomaly detection provides marginal finite-sample guarantees under the assumption of exchangeability. However, real-world data often exhibit distribution shifts, necessitating a weighted conformal approach to adapt to local non-stationarity. We show that this adaptation induces a critical trade-off between the minimum attainable p-value and its stability. As importance weights localize to relevant calibration instances, the effective sample size decreases. This can render standard conformal p-values overly conservative for effective error control, while the smoothing technique used to mitigate this issue introduces conditional variance, potentially masking anomalies. We propose a continuous inference relaxation that resolves this dilemma by decoupling local adaptation from tail resolution via continuous weighted kernel density estimation. While relaxing finite-sample exactness to asymptotic validity, our method eliminates Monte Carlo variability and recovers the statistical power lost to discretization. Empirical evaluations confirm that our approach not only restores detection capabilities where discrete baselines yield zero discoveries, but outperforms standard methods in statistical power while maintaining valid marginal error control in practice.
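The trade-off can be seen numerically: the discrete weighted conformal p-value is floored at roughly 1/(n_eff + 1), while a weighted kernel survival function can resolve smaller tail probabilities. A minimal sketch under our own assumptions (function names are ours; the relaxation here uses a Gaussian kernel with bandwidth h, not necessarily the paper's estimator):

```python
import numpy as np
from math import erf, sqrt

def weighted_conformal_p(s_test, s_cal, w_cal, w_test=1.0):
    # Discrete weighted conformal p-value: weighted mass of calibration
    # scores at least as extreme as the test score (larger = more anomalous).
    num = (w_cal * (s_cal >= s_test)).sum() + w_test
    return num / (w_cal.sum() + w_test)

def effective_sample_size(w):
    # Kish effective sample size; weight localization shrinks it,
    # raising the minimum attainable discrete p-value.
    return w.sum() ** 2 / (w ** 2).sum()

def kde_p_value(s_test, s_cal, w_cal, h):
    # Continuous relaxation: survival function of a weighted Gaussian
    # kernel density over calibration scores, decoupling tail resolution
    # from the discrete weight support.
    w = w_cal / w_cal.sum()
    tail = np.array([0.5 * (1.0 - erf((s_test - s) / (h * sqrt(2.0))))
                     for s in s_cal])
    return float((w * tail).sum())
```

With uniform weights over ten calibration scores, the discrete p-value for a far-out test score cannot drop below 1/11, whereas the kernel survival function does, which is the resolution the continuous relaxation recovers.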



Aligning Gradient and Hessian for Neural Signed Distance Function

Neural Information Processing Systems

Our motivation is grounded in a fundamental observation: aligning the gradient and the Hessian of the SDF provides a more efficient mechanism to govern gradient directions.
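One concrete reading of this observation (a sketch under our own assumptions, not the paper's actual loss): an exact SDF satisfies the eikonal identity |∇f| = 1, so ∇|∇f|² = 2H∇f vanishes, meaning the Hessian annihilates the gradient direction. Penalizing the Hessian-gradient product is therefore one way to couple second-order information to gradient directions; the function name below is hypothetical.

```python
import torch

def hessian_gradient_alignment(f, x):
    # For an exact SDF, |grad f| = 1 everywhere, hence H(f) @ grad f = 0.
    # Penalizing |H g|^2 uses the Hessian to govern gradient directions.
    x = x.requires_grad_(True)
    y = f(x).sum()
    g = torch.autograd.grad(y, x, create_graph=True)[0]   # per-sample gradients (n, d)
    gnorm2 = (g ** 2).sum()
    # grad of |g|^2 w.r.t. x is 2 H g (per sample), a cheap Hessian-vector product
    Hg = torch.autograd.grad(gnorm2, x, create_graph=True)[0] / 2.0
    return (Hg ** 2).sum(dim=-1).mean()
```

Sanity check: the exact sphere SDF f(x) = |x| - 1 incurs (near-)zero penalty, while a non-eikonal field such as f(x) = |x|² does not.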



Table 3: HumanActivity

Neural Information Processing Systems

As reviewers noticed, a detailed description of the experiments is provided in the supplement. We also switched to a finer discretization of 1 minute on the PhysioNet dataset, instead of 6 minutes. Modeling state interactions via an ODE allows better generalization outside the training interval compared to directly modeling a function of time. Using an ODE-RNN as a decoder is a possible extension.
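The generalization claim rests on the dynamics depending only on the state, not on absolute time: the same vector field can be rolled out past the interval it was fit on. A minimal sketch (the Euler integrator and function names are ours, not the ODE-RNN architecture discussed above):

```python
import numpy as np

def ode_rollout(z0, f, t_grid):
    # Euler rollout of dz/dt = f(z). Because f sees only the state z,
    # the rollout extrapolates beyond the training time range, unlike
    # a direct model z = g(t) tied to absolute time.
    z = np.array(z0, dtype=float)
    traj = [z.copy()]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        z = z + (t1 - t0) * f(z)
        traj.append(z.copy())
    return np.stack(traj)
```

For the linear decay field f(z) = -z, the rollout tracks the analytic solution z(t) = z0·e^(-t) to within the Euler step error.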


Continuous Categories Discovery

Neural Information Processing Systems

We refer to it as the Continuous Category Discovery (CCD) problem, which is significantly more challenging than the static setting.